The Real Test: Humanity and [Artificial] Intelligence in Alex Garland’s Ex Machina

By Michael W. Harris

Yes, what will happen?

If there is one question left in my brain at the end of Ex Machina, it is "who was the true villain of the film?" For so much of its runtime we are left in a state of unease at the actions and personality of its genius creator Nathan (Oscar Isaac)—some sort of Steve Jobs crossed with Mark Zuckerberg crossed with Dr. Frankenstein mad scientist—and we wonder when the other shoe will drop. Nathan is erratic, quick to anger and just as quick to soften, unpredictable, clearly an alcoholic, and paranoid. His security measures prove to be his very undoing, and also cause the death of his unwitting test subject/examiner, Caleb (Domhnall Gleeson), one of Nathan's employees who is there to perform a Turing Test on Ava (Alicia Vikander), Nathan's android creation.

There is not a lot of set-up to the film—we are quickly dumped into the beginning of the story, which is slowly unwound for us via dialogue—and it works because Caleb is just as clueless as the audience. Nathan, on the outside, would seem to be the picture of the cool, laid-back Silicon Valley billionaire: a brilliant, youthful genius whose ambition is outpaced only by his reckless and odd behavior.

And it is all these odd mannerisms of Nathan's, along with his heavy drinking, that continue to ratchet up the tension in Ex Machina. Beyond the rising tension of the actual plot—the testing of what is potentially the first true artificial intelligence—is our creeping unease about Nathan: the way he will latch onto an odd quote from Caleb and run with it, how he will openly compare himself to the gods, his mistreatment of the one other person at the house, the servant Kyoko. As these behaviors pile up, the audience is kept in a constant state of nervousness, wondering when, not if, Nathan will snap and kill Caleb.

Which is why, when it turns out in the end that Ava has been manipulating everyone, kills Nathan, and leaves Caleb to die slowly, the audience is left to ask: who was the real villain of the film?

*          *          *

I am ashamed to say that I first watched Ex Machina only long after it had been released to home video. To my credit, though, I was in good company when I did finally watch it. It was during the visit of a dear friend that I finally pulled the Blu-ray I had purchased off the shelf and popped it in. Also viewed during that visit were Sicario (directed by Arrival director Denis Villeneuve) and Hell or High Water (written by Taylor Sheridan, who also wrote Sicario). My friend and I had a good time discussing and dissecting all three films, though Ex Machina was certainly an outlier among the dusty scenery of the other two.

Watching Ex Machina again I was struck by how much it felt like an episode of Black Mirror, the British, now Netflix, sci-fi anthology series. It doesn’t recall any one episode of the series, but rather the overall visual and plot aesthetic of the show. If you were to tell me that Ex Machina were actually an episode of the show, I would completely believe you. It doesn’t hurt that lead actor Domhnall Gleeson appeared in a second season episode of the show, and composers Ben Salisbury and Geoff Barrow scored an episode from season three.

But it is more than that. Black Mirror, which began airing in 2011 and has released 19 episodes to date, has cultivated a fairly consistent look of future technology based upon what we have right now and where it could logically go in the near future. The "black mirror" of the title refers to our phones and other device screens that, when turned off, look like black mirrors. The show takes these screens, and the technology they represent, and turns them into a looking glass for where our society is heading. And over the course of the series it has cultivated a very clean visual aesthetic for where the design of these devices will lead. The trend of minimizing the bezel around the screen is taken to its logical end of eliminating it almost entirely (something already seen in current phones and computers), and screens for televisions have similarly grown, sometimes taking up entire walls. To say nothing of the modern interior design aesthetic.

In all, Black Mirror, like Ex Machina, cultivates a sense of the "near-ish" future, the "twenty minutes from now" approach to sci-fi, as opposed to the grand fantasies of Star Trek or the "used" future of Star Wars. This is quite similar to the story settings of Arrival and Sunshine, and like those films and episodes of Black Mirror, Ex Machina tells a very contained, personal story. There are a limited number of characters, and the story plays out on a small scale with a very limited view of the wider world (though there are exceptions to this in Black Mirror, especially in some of the earliest episodes). But, very much like an episode of Black Mirror, Ex Machina explores a single idea and examines how it can spin out to a logical, if extreme, end, and in the process challenges our views of modern society and its trends.

However, it is the visual look of Ex Machina that keeps bringing me back to Black Mirror, which, in so many ways, is part of why that show also haunts me and why it took me so long (right around a year) to make my way through its relatively small number of episodes. Their shared vision of the future looks so probable that you can't help but take its message about where we are headed more seriously. The clean visual lines mirror the design aesthetic of Apple products—seen writ large in the company's new spaceship-like headquarters in Cupertino, California—which has in turn influenced most other device manufacturers, including the Microsoft Surface that I am writing this on. They present a plausible look for the future, and thus the viewer spends less time thinking about how "weird" or "different" the future is and instead focuses on the "oh my god, where are we heading?" questions and dilemmas it poses. And Ex Machina takes the exact same approach. We are so close to its future that it might as well be happening right now.

Hell, it could be happening right now for all we know.

*          *          *

The mysterious Kyoko and Ava…

So what are those questions, and how can they help us decipher who is the true villain of the piece? The more one pokes at the plot of Ex Machina, the more it plays like a retelling of Frankenstein, with Nathan as the doctor and Ava as his monster. But is Ava as innocent as the doctor's creation in that tale, or does the twist of Nathan trying to create intelligence, rather than simply life, change how we think of creator and creation? Frankenstein is truly about a man aspiring to steal power from the gods, stated explicitly in the novel's subtitle, The Modern Prometheus, and Ex Machina underscores this point during Nathan and Caleb's early conversation in which the nature of Caleb's visit to Nathan's estate is laid out:

Nathan: Over the next few days you’re going to be the human component in a Turing test.
Caleb: Holy shit!
Nathan: Yeah, that’s right, Caleb. You got it. Because if the test is passed, you are dead center of the greatest scientific event in the history of man.
Caleb: If you’ve created a conscious machine, it’s not the history of man. That’s the history of gods.

It is also implied throughout the film that Nathan himself buys into this idea of him as a god. He even named his robot after the first woman: Ava, Eve. It is a not-so-subtle nod.

But for all of Nathan's grandiose statements about creating intelligence, he is also quick to dismiss any questions of ethics and responsibility. He says that creating Ava was less a decision than an evolution. It was the next step, the logical outcome of what was already happening, and if it wasn't him, it would have been someone else.

But Ava was not the first, as is revealed. She was simply the most recent, and most successful, iteration in Nathan's experiments. She was the next evolution in his attempts to create AI, no more or less important in the grand scheme than any of the others. If she fails, then he will simply tweak the code, try to fix whatever was wrong, and start over. Nathan, for all of his delusions of grandeur, is still a scientist looking at the macro-level picture.

Caleb, however, looks at the micro, the singular: Ava. To him, Ava has passed the test, and he puts in motion a plan to free her from her prison because Nathan plans to start over—it is not clear to Nathan whether Ava is actual AI or merely imitating it, the base-level question of simulated intelligence versus actual intelligence. Caleb sums up the question as: "you can play [a chess computer] to find out if it makes good moves, but that won't tell you if it knows that it's playing chess. And it won't tell you if it knows what chess is."

Truly, whether or not Nathan is the villain of the piece comes down to whether you believe artificial intelligence has equal moral, ethical, and legal standing to human life—a question that Star Trek: The Next Generation probed in the second-season episode "The Measure of a Man." Does Ava represent not only AI but also an intelligence with the same claim to "life, liberty, and the pursuit of happiness" that humanity has? And to muddy those intellectual waters, would Alexa or Siri or whatever comes next also have similar claims to autonomy if they were actual AI and not programs with a human voice? (This question was probed during Black Mirror's Christmas special.)

Nathan, judging by his actions throughout the film, does not seem to believe that Ava, or any of her forebears, is a living being with rights. He routinely abuses them, treats them as sexual slaves, and "kills" them in order to build the next generation, all without a hint of empathy. The reveal is meant to unsettle the audience, and his actions hold a mirror up to us when we think about how we might treat such artificial life. Even more creepily, Nathan preserves the bodies of the earlier models in cabinets, on display like trophies or museum pieces. He literally has skeletons in his closet.

*          *          *

Ava is so much a pawn during most of Ex Machina that it is hard to consider her a villain, even if she does kill two people. She is manipulated by Nathan seemingly at every turn, and it is implied that Nathan designed her to be sexually appealing to Caleb based on his “pornography profile.” When Caleb confronts him about this, Nathan flippantly responds, “Hey, if a search engine’s good for anything, right?”

Ava's very existence is visually manipulated, just as the test that Caleb is there to perform, the Turing Test, is not performed correctly. In a true Turing Test, the examiner does not "see" the subject, but rather interacts via a computer screen so that all he can do is read the text. Nathan, though, has a ready answer to this criticism: "We're way past that. If I hid Ava from you so you could just hear her voice, she would pass for human. The real test is to show you that she's a robot and then see if you still feel she has consciousness." In this way, the test was more about Caleb than Ava. Nathan wanted to judge Caleb's reactions. The true reason Nathan chose Caleb was that he was a good kid, had no family, was smart, and could be easily manipulated by a strong parental figure whose approval he was eager to win. Nathan was manipulative, but Ava was possibly even better at it. She took advantage of every little thing given to her in order to escape—but was she a villain?

Perhaps the biggest question hanging over Ava, though, is the issue of androids and sexuality. By giving his android a feminine identity, Nathan was clearly setting up a situation for sexual attraction on Caleb's part, and he also programmed Ava to exhibit attraction towards men. Caleb poses this question to Nathan, which again leads to a seemingly ready response, as if Nathan had been expecting it:

Caleb: Why did you give her sexuality? An AI doesn’t need a gender. She could have been a grey box.
Nathan: Actually I don’t think that’s true. Can you give an example of consciousness at any level, human or animal, that exists without a sexual dimension?
Caleb: They have sexuality as an evolutionary reproductive need.
Nathan: What imperative does a grey box have to interact with another grey box? Can consciousness exist without interaction? Anyway, sexuality is fun, man. If you’re gonna exist, why not enjoy it? You want to remove the chance of her falling in love and fucking? And the answer to your real question, you bet she can fuck.
Caleb: What?
Nathan: In between her legs, there’s an opening, with a concentration of sensors. You engage them in the right way, creates a pleasure response. So if you wanted to screw her, mechanically speaking, you could. And she’d enjoy it.
Caleb: That wasn’t my real question.
Nathan: Oh, okay. Sorry.

The trope of the female android, sometimes called a gynoid—the word itself an immediate other-ing of the feminine in a creation that only has gender imposed upon it by an outside force—is part of a long tradition in science fiction of equating female androids with dolls or other child-like creations in need of male protection. However, Ava does end the film with agency of her own; she has had some power all along (represented literally in her ability to overload the power grid and gain some limited autonomy), and it is this that allows her to escape her prison and explore the world. But in doing so she kills Nathan and leaves poor, hapless Caleb to die.

So in many ways, Ava's actions are completely justified, especially given Nathan's abuse of the earlier androids he created. And the overall question of male abuse of those who hold very little power, especially in our current era of reckoning with male abuses of power and authority, is quite intriguing. Ava is assumed to be powerless and controlled, and in the end she knows that if she "fails" the test she will be deactivated, essentially killed. (In this way, her desire to escape and live is not unlike that of the robot "Johnny 5" in Short Circuit, though much more chilling.) And if Ava had only killed Nathan, and if the film had ended with more certainty that Caleb might live, I would probably be more comfortable labeling Nathan as the clear villain. But what about Caleb? He was Ava's unwitting accomplice, manipulated by her almost as much as he was used by Nathan. If there is an actual innocent victim, beyond Nathan's abused predecessors to Ava, it is Caleb, and the way in which Ava treats him is cold and calculated. Villain-esque.

Ava may have been justified in her killing of Nathan, and making sure Caleb was trapped and couldn't stop her escape was also a move of self-preservation (after she killed Nathan, Caleb would certainly have tried to stop or otherwise deactivate her). And I probably would have been a bit angry with the film if both Caleb and Ava had escaped and ridden the helicopter into the sunset. So I think the unease I feel at the end of the film was a calculated move by Garland to force the audience to meditate on this question of the villain. How do I feel about Ava's actions? Because if I grant Ava intelligence, then I feel she should be held accountable like any other human—which, for the record, is the side of the argument I fall on. However, if I don't think she is conscious, then Ava represents the dangers of technology run amok.

In many ways, Ex Machina presents us with the same basic dilemma surrounding AI that we have been wrestling with since 2001: A Space Odyssey. HAL was acting out of self-preservation, as is Ava, as was Skynet, and isn't that a response that all living beings share? Wouldn't that mean she has consciousness? So then the question becomes: does she feel bad about it? Does she have empathy? And if so, would that, then, make her human?

*          *          *

Nathan’s got the whole world in his hands…or at least a synthetic brain.

Ava is an "other" in human form, and she is further other-ed from the male characters by possessing female traits. Indeed, the vast majority of women on screen in Ex Machina are not humans at all (the exception being Caleb's co-workers seen in the very first shot of the film). It is also strongly hinted that Caleb is not only single but also might be your stereotypical nerd who is not the most natural at interacting with women (or any humans, really), while Nathan can be seen as having traits associated with toxic masculinity.

Perhaps there is a third option, though, as to whether a character is hero or villain. Maybe the question isn't who is a villain, but rather who isn't a villain? Caleb certainly comes the closest, but we must ask whether his actions to "save" Ava are performed for altruistic reasons or out of some base masculine need to "protect" the damsel in distress. Granted, the "damsel" really was not in need of saving, and Caleb is simply an easily manipulated dupe.

In the end, the lingering questions of Ex Machina are what give it its power. Where Terminator or 2001 or most other depictions of AI as the other pose easy narratives of who we should identify with, Garland's film challenges us with the essence of what we need to be discussing: what are our criteria for the "intelligence" portion of AI, and how do we treat it on an emotional and legal level, even if it is "artificial"? What responsibilities and rights are we obligated to attach to it? This is a thread picked up in various depictions of the future seen in Star Trek and Blade Runner—the question of "what makes us human?" But in showing us the moment of conception, Ex Machina thrusts that question into the moment in which we encounter the Other for the first time, a moment that is probably closer than any of us care to imagine.
